
    Portfolio management using partially observable Markov decision process

    Portfolio theory is concerned with how an investor should divide his wealth among different securities. This problem was first formulated by Markowitz in 1952. Since then, other more sophisticated formulations have been introduced. However, practical issues like transaction costs and their effects on portfolio choice over multiple stages have not been widely considered. In our work, we show that the portfolio management problem is appropriately formulated as a Partially Observable Markov Decision Process. We use a Monte Carlo method called "rollout" to approximate an optimal decision-making strategy. To capture the behavior of stock prices over time, we use two well-known models.
    2nd place, IS&T Graduate Group
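
    The abstract does not spell out the rollout procedure, so the following is only a minimal sketch of the general rollout idea: for each candidate action, simulate many futures under a fixed base policy and pick the action with the best average outcome. The lognormal price model, the candidate allocations, the transaction-cost figure and all function names are assumptions made for illustration, not details taken from the paper.

```python
import random

# A minimal sketch of the "rollout" idea, under assumed details: prices follow a toy
# lognormal random walk, the only decision is the fraction of wealth held in the stock,
# and the base policy simply keeps the current allocation for the remaining stages.

CANDIDATE_FRACTIONS = [0.0, 0.25, 0.5, 0.75, 1.0]  # candidate stock allocations (assumed)
TRANSACTION_COST = 0.001                           # proportional cost per rebalance (assumed)
HORIZON = 10                                       # remaining decision stages
N_ROLLOUTS = 500                                   # Monte Carlo trajectories per candidate action


def simulate_return(drift=0.0005, vol=0.01):
    """One-step stock return from an assumed lognormal random walk."""
    return random.lognormvariate(drift, vol) - 1.0


def rollout_value(wealth, fraction, horizon):
    """Average terminal wealth when `fraction` is fixed now and the base policy
    (keep the same allocation) is followed for the remaining stages."""
    total = 0.0
    for _ in range(N_ROLLOUTS):
        w = wealth
        for _ in range(horizon):
            stock_part = w * fraction
            w = stock_part * (1.0 + simulate_return()) + (w - stock_part)
        total += w
    return total / N_ROLLOUTS


def rollout_decision(wealth, current_fraction):
    """Pick the allocation whose one-step switch (net of transaction costs),
    followed by the base policy, has the best simulated average outcome."""
    best_fraction, best_value = current_fraction, float("-inf")
    for f in CANDIDATE_FRACTIONS:
        cost = TRANSACTION_COST * abs(f - current_fraction) * wealth
        value = rollout_value(wealth - cost, f, HORIZON)
        if value > best_value:
            best_fraction, best_value = f, value
    return best_fraction


print(rollout_decision(wealth=1_000.0, current_fraction=0.5))
```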

    Data-graph repairs: the preferred approach

    Repairing inconsistent knowledge bases is a task that has been addressed, with great advances over several decades, within the knowledge representation and reasoning and the database theory communities. As information becomes more complex and interconnected, new types of repositories, representation languages and semantics are developed in order to query and reason about it. Graph databases provide an effective way to represent relationships among data, and allow these connections to be processed and queried efficiently. In this work, we focus on the problem of computing preferred (subset and superset) repairs for graph databases with data values, using a notion of consistency based on a set of Reg-GXPath expressions as integrity constraints. Specifically, we study the problem of computing preferred repairs based on two different preference criteria, one based on weights and the other based on multisets, and show that in most cases it is possible to retain the same computational complexity as in the case where no preference criterion is available for exploitation.
    Comment: arXiv admin note: text overlap with arXiv:2206.0750
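
    As a rough illustration of what a weight-based preferred subset repair means, the sketch below brute-forces a maximum-weight consistent set of edges. The consistency check stands in for the Reg-GXPath constraints studied in the paper, weights are assumed positive, and neither the names nor the (exponential) search strategy reflect the paper's actual algorithms or complexity results.

```python
from itertools import combinations

# Strong simplifications: constraints are just a boolean consistency check over edge
# sets (standing in for Reg-GXPath expressions), repairs are obtained only by deleting
# edges, weights are assumed positive, and the search is brute force.

def preferred_subset_repair(edges, weight, consistent):
    """Return a maximum-total-weight subset of `edges` satisfying `consistent`.

    edges      -- iterable of (source, label, target) triples
    weight     -- dict mapping each edge to its (assumed) positive preference weight
    consistent -- callable taking a set of edges, True iff the constraints hold
    """
    edges = list(edges)
    best, best_weight = frozenset(), float("-inf")
    for size in range(len(edges), -1, -1):          # exponential enumeration; a sketch only
        for subset in combinations(edges, size):
            s = frozenset(subset)
            if consistent(s):
                w = sum(weight[e] for e in s)
                if w > best_weight:
                    best, best_weight = s, w
    return best


# Toy example: the constraint forbids having both a "works_at" and a "retired" edge
# for the same person; weights encode how much we trust each edge.
edges = [("ann", "works_at", "acme"), ("ann", "retired", "true"), ("bob", "works_at", "acme")]
weight = {edges[0]: 2.0, edges[1]: 1.0, edges[2]: 1.5}
conflict_free = lambda s: not ({edges[0], edges[1]} <= s)
print(preferred_subset_repair(edges, weight, conflict_free))
```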

    On the complexity of finding set repairs for data-graphs

    In the deeply interconnected world we live in, pieces of information link domains all around us. Because graph databases effectively capture relationships among data and allow these connections to be processed and queried efficiently, they are rapidly becoming a popular storage platform that supports a wide range of domains and applications. As in the relational case, the data is expected to preserve a set of integrity constraints that define the semantic structure of the world it represents. When a database does not satisfy its integrity constraints, a possible approach is to search for a 'similar' database that does satisfy them, also known as a repair. In this work, we study the problem of computing subset and superset repairs for graph databases with data values, using a notion of consistency based on a set of Reg-GXPath expressions as integrity constraints. We show that for positive fragments of Reg-GXPath these problems admit a polynomial-time algorithm, while the full expressive power of the language renders them intractable.
    Comment: 35 pages, including Appendix
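
    To make the subset/superset repair notions concrete, here is a toy sketch for one assumed constraint of a very simple positive form ("every node with an outgoing 'manages' edge also has an outgoing 'works_at' edge"): deleting the offending edges gives a subset repair, while adding the missing edges toward a placeholder node gives a superset repair. The constraint, the data and the placeholder are illustrative inventions, not the paper's Reg-GXPath setting or its algorithms.

```python
# Edges are (source, label, target) triples; the single assumed constraint is:
# every node with an outgoing 'manages' edge must also have an outgoing 'works_at' edge.

def subset_repair(edges):
    """Delete just enough edges that the assumed constraint holds."""
    has_works_at = {src for src, label, _ in edges if label == "works_at"}
    return {e for e in edges if not (e[1] == "manages" and e[0] not in has_works_at)}


def superset_repair(edges, placeholder="unknown_employer"):
    """Add just enough edges that the assumed constraint holds, using a placeholder target."""
    has_works_at = {src for src, label, _ in edges if label == "works_at"}
    needed = {(src, "works_at", placeholder)
              for src, label, _ in edges
              if label == "manages" and src not in has_works_at}
    return set(edges) | needed


g = {("ann", "manages", "bob"), ("bob", "works_at", "acme")}
print(subset_repair(g))    # drops ann's 'manages' edge
print(superset_repair(g))  # instead adds ('ann', 'works_at', 'unknown_employer')
```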

    An epistemic approach to model uncertainty in data-graphs

    Graph databases are becoming widely successful as data models that effectively represent and process complex relationships among various types of data. As with any other type of data repository, graph databases may suffer from errors and discrepancies with respect to the real-world data they intend to represent. In this work we explore the notion of probabilistic unclean graph databases, previously proposed for relational databases, in order to capture the idea that the observed (unclean) graph database is actually a noisy version of a clean one that correctly models the world but that we only know partially. As the factors involved in the observation can be many, e.g., all different types of clerical errors or unintended transformations of the data, we assume a probabilistic model that describes the distribution over all possible ways in which the clean (uncertain) database could have been polluted. Based on this model we define two computational problems, data cleaning and probabilistic query answering, and study the complexity of each when the transformation of the database can be caused by either removing (subset) or adding (superset) nodes and edges.
    Comment: 25 pages, 3 figures
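
    The abstract leaves the noise model abstract, so the sketch below illustrates the general idea with an assumed model in which edges are corrupted independently: each true edge survives with a fixed probability, spurious edges appear with another, and cleaning is a brute-force maximum-a-posteriori search over candidate clean graphs. The probabilities, the 0/1 prior standing in for an integrity constraint, and all names are hypothetical, not taken from the paper.

```python
from itertools import combinations

# Toy "probabilistic unclean database": the observed graph is assumed to arise from a
# clean one by independently dropping true edges (prob. 1 - KEEP) and inserting spurious
# ones (prob. NOISE). Cleaning searches a small universe of possible edges for the
# candidate clean graph with the highest posterior score.

KEEP, NOISE = 0.9, 0.05


def likelihood(clean, observed, universe):
    """P(observed | clean) under the assumed independent per-edge corruption."""
    p = 1.0
    for e in universe:
        in_clean, in_obs = e in clean, e in observed
        if in_clean:
            p *= KEEP if in_obs else (1.0 - KEEP)
        else:
            p *= NOISE if in_obs else (1.0 - NOISE)
    return p


def map_clean(observed, universe, prior):
    """Return the candidate clean graph maximizing prior(clean) * P(observed | clean)."""
    best, best_score = frozenset(), 0.0
    for size in range(len(universe) + 1):
        for cand in combinations(universe, size):
            c = frozenset(cand)
            score = prior(c) * likelihood(c, observed, universe)
            if score > best_score:
                best, best_score = c, score
    return best


universe = [("ann", "works_at", "acme"), ("ann", "retired", "true"), ("bob", "works_at", "acme")]
observed = {universe[0], universe[1]}
# Assumed integrity constraint encoded as a 0/1 prior: 'works_at' and 'retired' exclude each other.
prior = lambda c: 0.0 if {universe[0], universe[1]} <= c else 1.0
print(map_clean(observed, universe, prior))
```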

    The Super-Earth Opportunity - Search for Habitable Exoplanets in the 2020s

    The recent discovery of a staggering diversity of planets beyond the Solar System has brought with it a greatly expanded search space for habitable worlds. The Kepler exoplanet survey has revealed that most planets in our interstellar neighborhood are larger than Earth and smaller than Neptune. Collectively termed super-Earths and mini-Neptunes, some of these planets may have the conditions to support liquid water oceans, and thus Earth-like biology, despite differing in many ways from our own planet. In addition to their sheer abundance, super-Earths are relatively large and thus more easily detected than true Earth twins. As a result, super-Earths represent a uniquely powerful opportunity to discover and explore a panoply of fascinating and potentially habitable planets in 2020-2030 and beyond.
    Comment: Science white paper submitted to the 2020 Astronomy and Astrophysics Decadal Survey